Could Artificial Intelligence REALLY Wipe Out Humanity? - AI Summary
Even staple figures in science such as Stephen Hawking and Elon Musk have been vocal about technology's threat to humanity. In reality, however, machines generally operate as they are programmed to, and we are a long way from developing the artificial superintelligence (ASI) needed for such a "takeover" to even be feasible. At present, most AI technology used in machines is considered "narrow" or "weak," meaning it can apply its knowledge to only one or a few tasks. "Machine learning and AI systems are a long way from cracking the hard problem of consciousness and being able to generate their own goals contrary to their programming," George Montanez, a data scientist at Microsoft, wrote in the same Metafact thread. That said, AI does carry real risks, including overoptimization, weaponization, and ecological collapse, according to Ben Nye, Director of Learning Sciences at the University of Southern California Institute for Creative Technologies (USC-ICT).
AI drone may have 'hunted down' and killed soldiers in Libya without human input
AI drone may have 'hunted down' and killed soldiers in Libya without human input By Charles Q. Choi - Live Science Contributor - June 3, 2021 KARGU a Rotary Wing Attack Drone Loitering Munition System At least one autonomous drone operated by artificial intelligence (AI) may have killed people for the first time last year in Libya, without any humans being consulted prior to the attack, according to a U.N. report. The March report from the U.N. Panel of Experts on Libya says lethal autonomous aircraft may have "hunted down and remotely engaged" soldiers and convoys fighting for Libyan general Khalifa Haftar. It is not clear who exactly deployed these killer robots, though remnants of one such machine found in Libya came from the Kargu-2 drone, which is made by Turkish military contractor STM. "Landmines are essentially simple autonomous weapons -- you step on them and they blow up," said Zachary Kallenborn, a research affiliate with the National Consortium for the ...
- Africa > Middle East > Libya (1.00)
- Asia > Middle East > Republic of Türkiye (0.36)
- North America > United States > Maryland > Prince George's County > College Park (0.05)
- Asia > Middle East > Saudi Arabia (0.05)
Another Expert Joins Stephen Hawking and Elon Musk in Warning About the Dangers of AI
In 2012, Michael Vassar became the chief science officer of MetaMed Research, which he co-founded; before that, he served as president of the Machine Intelligence Research Institute. Clearly, he knows a thing or two about artificial intelligence (AI), and now he has come out with a stark warning for humanity regarding the development of artificial superintelligence. In a video posted by Big Think, Vassar states, "If greater-than-human artificial general intelligence is invented without due caution, it is all but certain that the human species will be extinct in very short order." Essentially, he is warning that an unchecked AI could eradicate humanity. Vassar's views are based on the writings of Nick Bostrom, specifically those in his book "Superintelligence."
Stephen Hawking and Elon Musk backed 23 principles to ensure humanity benefits from AI
Cosmologist Stephen Hawking and Tesla CEO Elon Musk endorsed a set of principles this week that have been established to ensure that self-thinking machines remain safe and act in humanity's best interests. Machines are getting more intelligent every year and researchers believe they could possess human levels of intelligence in the coming decades. Once they reach this point they could then start to improve themselves and create other, even more powerful AIs, known as superintelligences, according to Oxford philosopher Nick Bostrom and several others in the field. In 2014, Musk, who has his own $1 billion AI research company, warned that AI has the potential to be "more dangerous than nukes" while Hawking said in December 2014 that AI could end humanity. But there are two sides to the coin.
- Europe > Estonia > Harju County > Tallinn (0.06)
- North America > United States > California (0.05)
- Media > Film (0.30)
- Leisure & Entertainment (0.30)
- Information Technology (0.30)
Your next boss: A computer algorithm?
Computers keep getting smaller and faster. That's been happening for decades. But almost all of them are programmed to do what humans want them to do, the way humans want them to do it, and nothing more. Now computers are beginning to learn -- on their own. Years of research into artificial intelligence are beginning to pay off.
Artificial Intelligence Will Drive Consolidation in 2017
If the last few weeks have taught us anything, it's that predictions, more often than not, are incorrect. With that simple caveat, I turn my attention to 2017 and what may or may not happen. Looking into next year is like looking into a foggy crystal ball -- everything is a little murky, but you have an idea of where things are headed. The biggest topic of the coming year is artificial intelligence. Everyone from Stephen Hawking and Elon Musk to Tom Cruise and Kanye West is talking about A.I.
Eric Schmidt dismissed the AI fears raised by Stephen Hawking and Elon Musk
Google executive chairman Eric Schmidt has questioned whether renowned scientist Stephen Hawking and SpaceX billionaire Elon Musk are in a position to accurately predict the future of artificial intelligence. Hawking told the BBC in 2014 that AI could end mankind, while Musk tweeted that same year that AI could be more dangerous than nuclear weapons after reading a book called "Superintelligence." Schmidt was asked at the Brilliant Minds conference in Stockholm on Thursday what he made of their predictions. In response, he said: "In the case of Stephen Hawking, although a brilliant man, he's not a computer scientist. Elon [Musk] is also a brilliant man, though he too is a physicist, not a computer scientist."